feat(mcp): first version of MCP client #2549
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Conversation
Fantastic, thank you @cairijun. The holidays start from Monday for me, so I will get around to properly testing this out then. But this will be priority 1 for me.
olimorris left a comment
Firstly, this is awesome and I'm very excited about this making its way into CodeCompanion.
I've done a first pass review. Nothing major, lots of nitpicking. Then I'll give it some proper testing over the holiday period.
Giving some thought to the UX: to distinguish MCP "tools" from regular tools, the former should be visible in the chat buffer as @{mcp.runPython}. That makes it easier for users to filter via their completion plugin and keeps the distinction clear.
Seeing how you've implemented the MCP servers:
sequentialThinking = {
  cmd = { "npx", "-y", "@modelcontextprotocol/server-sequential-thinking" },
  tool_overrides = {
    sequentialthinking = {
      output = {
        success = function(self, tools, cmd, stdout)
          local output = stdout and stdout[#stdout]
          local msg = "Sequential thinking: " .. self.args.thought
          tools.chat:add_tool_output(self, output, msg)
        end,
      },
    },
  },
},

...is brilliant 👏🏼. The tool_bridge.lua file is an excellent idea. Ideally, we wouldn't have sequentialthinking appear twice in the same config block, but this can be tackled towards the end of the PR.
Seeing success = function(self, tools, cmd, stdout)... this one is on me to refactor. I'm trying to move everything over to the success = function(self, args) pattern as it's future-proof should we add additional arguments. I realized this far too late on, so heads up: this may change in the future.
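For illustration, a minimal sketch of what the same override might look like once callbacks take a single args table; the field names inside args are assumptions, not the final API:

```lua
-- Hypothetical sketch only: assumes the new-style callback receives one
-- `args` table bundling what is currently passed positionally
-- (tools, cmd, stdout); field names are guesses, not the final API.
tool_overrides = {
  sequentialthinking = {
    output = {
      success = function(self, args)
        local stdout = args.stdout
        local output = stdout and stdout[#stdout]
        local msg = "Sequential thinking: " .. self.args.thought
        args.tools.chat:add_tool_output(self, output, msg)
      end,
    },
  },
},
```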
Once again, thanks for this brilliant PR.
local M = {}

---Default tool output callbacks
local DefaultOutputCallbacks = {}
This should be snake case.
Also, using Default implies that there is an alternative for these callbacks, something other than the defaults.
We do indeed have alternatives. The user can override some of the callbacks via the tool_overrides.{TOOL_NAME}.output config table. I changed it to local default_output = { ... }.
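A minimal sketch of how the defaults and the per-tool overrides could fit together; the helper name and lookup below are illustrative assumptions, not the PR's actual code:

```lua
-- Illustrative sketch: default callbacks used when a tool has no override.
local default_output = {
  success = function(self, tools, cmd, stdout)
    local output = stdout and stdout[#stdout]
    tools.chat:add_tool_output(self, output)
  end,
}

-- Hypothetical helper: prefer tool_overrides.<tool_name>.output from the
-- server config, falling back to the defaults for anything not overridden.
local function resolve_output(server_config, tool_name)
  local overrides = server_config.tool_overrides or {}
  local tool_override = overrides[tool_name] or {}
  return vim.tbl_deep_extend("keep", tool_override.output or {}, default_output)
end
```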
Force-pushed from 968ca4a to 163c909
Pull request overview
This PR implements a basic MCP (Model Context Protocol) client for CodeCompanion, enabling integration with MCP servers to expose their tools as CodeCompanion tools. The implementation supports stdio transport, tool pagination, roots capability, and customizable tool behavior.
Key changes:
- Core MCP client with JSON-RPC communication over stdio transport
- Tool bridge to convert MCP tools into CodeCompanion tool format with grouping by server
- Comprehensive test coverage with mock transport for unit and integration testing
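The first bullet above mentions JSON-RPC over stdio: MCP's stdio transport exchanges newline-delimited JSON-RPC 2.0 messages with the spawned server process. A rough sketch of framing a tools/list request in Lua follows; the PR's client.lua may structure this differently, and the process handle is an assumption:

```lua
-- Rough sketch of framing a JSON-RPC request for MCP's stdio transport:
-- one JSON object per line written to the server's stdin.
local request = {
  jsonrpc = "2.0",
  id = 1,
  method = "tools/list", -- optional params could carry a pagination cursor
}
local line = vim.json.encode(request) .. "\n"
-- `proc` is an assumed handle to the spawned server process, e.g. from vim.system():
-- proc:write(line)
-- The response arrives as a single line of JSON on stdout, roughly:
-- {"jsonrpc":"2.0","id":1,"result":{"tools":[...],"nextCursor":"..."}}
```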
Reviewed changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 12 comments.
| File | Description |
|---|---|
| tests/stubs/mcp/tools.jsonl | Test fixtures containing sample MCP tool definitions for testing |
| tests/mocks/mcp_client_transport.lua | Mock transport implementation enabling testable JSON-RPC communication without external processes |
| tests/mcp/test_mcp_client.lua | Unit tests for MCP client initialization, tool loading, tool calls, timeouts, and roots capability |
| tests/interactions/chat/mcp/test_mcp_tools.lua | Integration tests verifying MCP tools work within chat interactions with tool overrides |
| tests/config.lua | Adds empty MCP config section to test configuration |
| lua/codecompanion/types.lua | Type definitions for MCP protocol structures (JSON-RPC, tools, content blocks) |
| lua/codecompanion/mcp/tool_bridge.lua | Converts MCP tool specifications to CodeCompanion tools with customizable output handlers |
| lua/codecompanion/mcp/init.lua | Module entry point that starts all configured MCP servers on first chat creation |
| lua/codecompanion/mcp/client.lua | Main client implementation handling transport, JSON-RPC, initialization, and tool operations |
| lua/codecompanion/interactions/chat/tools/init.lua | Updates JSON decode to handle null values properly for MCP tool arguments |
| lua/codecompanion/interactions/chat/init.lua | Integrates MCP startup into chat initialization flow |
| lua/codecompanion/config.lua | Adds default MCP configuration with empty servers table |
Hi @olimorris, thanks for your detailed and constructive comments! I've been a bit busy with some personal matters recently, so I'm replying slower than usual. I've pushed some commits addressing most of the comments. The generated tool groups are now named [...]. There's also a major change: I moved the output formatting logic from [...].
Force-pushed from 47946fa to 2eac686
No worries @cairijun and no rush. I too have been in the same boat. We'll definitely get this PR into CodeCompanion 👍🏼. I will be picking this back up next week too.

I've got some time planned for this weekend to take a look at this. Looking forward to it.
olimorris left a comment
Initial reaction:
This is awesome. Really, really awesome. The fact that all I needed to do to get this working was:
mcp = {
  servers = {
    ["tavily-mcp"] = {
      cmd = { "npx", "-y", "tavily-mcp@latest" },
      env = {
        TAVILY_API_KEY = "cmd:op read op://personal/Tavily_API/credential --no-newline",
      },
    },
  },
},

...is tremendous.
I've put some inline comments. Pretty much minor stuff now. I will write the docs and add some more test coverage (think we might need a bit of coverage around mcp servers appearing in the completion list).
There are a couple of (hopefully) small hurdles I've noticed:
- In my example above, I'm being prompted by 1Password every time I create a new chat buffer. I may not want to add the Tavily MCP server to that chat buffer, so I think this prompt should only appear once I've pressed submit to send the response to the LLM.
- It can be hit-and-miss whether the MCP server appears in the completion list in the chat buffer. I notice that when I open a fresh Neovim and create a chat buffer, I often don't see the MCP servers; if I open a second chat buffer, I do see them. I think we may need to shift the logic of creating the MCP groups to the providers.completion.init file, like we do for all other tools and groups.
Otherwise, this is amazing and I can't wait to merge this into CodeCompanion. No rush and take your time. We all have lives outside of open-source. And, if you need me to pick up any bits, just say.
There are a couple of LSP warnings in this file that are worth addressing.
@@ -0,0 +1,194 @@
local log = require("codecompanion.utils.log")

local CONSTANTS = {
I've extracted some messaging to the top of the file as I'm likely to tweak this over time. Also, my preference is to have @{mcp:tavily} rather than @{mcp.tavily}, as I think it visually clarifies that tavily is part of the MCP group (super minor nitpick).
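A rough sketch of what that extraction could look like; the exact names and message text are assumptions based on this thread, not the PR's code:

```lua
-- Illustrative only: constants hoisted to the top of tool_bridge.lua so the
-- prefix and messaging can be tweaked in one place.
local CONSTANTS = {
  TOOL_PREFIX = "mcp:", -- produces group names like @{mcp:tavily}
  GROUP_DESCRIPTION = "Tools provided by the %s MCP server",
}
```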
  table.insert(server_prompt, server_instructions)
end

chat_tools.groups[fmt("%s%s", CONSTANTS.TOOL_PREFIX, client.name)] = {
I've noticed that MCP tools appearing in the completion menu in the chat buffer can be temperamental. I'm assuming there is a timing difference between when these are resolved in the chat buffer and when the providers.completion.init file resolves them. I do think that logic should be moved to the latter file.
Description
As discussed in #2506, we want a basic MCP client implemented in CodeCompanion.
Core design:
- MCP clients are started on first chat creation.
- Add @{mcpServerName} to the chat buffer to grant tool access to the LLM.

Features:
- [...] mcp-remote [...]
- [...] experimental [...] in the Spec [...]
- [...] log:info [...]

Config example:
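A sketch assembled from the examples used earlier in this thread (the tavily-mcp and sequential-thinking servers are the reviewers' examples; the exact sample in the PR description may differ):

```lua
require("codecompanion").setup({
  mcp = {
    servers = {
      ["tavily-mcp"] = {
        cmd = { "npx", "-y", "tavily-mcp@latest" },
        env = {
          -- resolved via a shell command so the key isn't stored in plain text
          TAVILY_API_KEY = "cmd:op read op://personal/Tavily_API/credential --no-newline",
        },
      },
      sequentialThinking = {
        cmd = { "npx", "-y", "@modelcontextprotocol/server-sequential-thinking" },
      },
    },
  },
})
```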
Related Issue(s)
#2506
Screenshots
Checklist
- I have run make all to ensure docs are generated, tests pass and my formatting is applied
- I have updated CodeCompanion.has in the init.lua file for my new feature